Zero-to-One IDV: A Conceptual Model for AI-Powered Identity Verification
Vaidya, Aniket; Awasthi, Anurag
In today's increasingly digital interactions, robust Identity Verification (IDV) is crucial for security and trust, and Artificial Intelligence (AI) is transforming IDV by enhancing accuracy and fraud detection. This paper introduces ``Zero to One,'' a holistic conceptual framework for developing AI-powered IDV products. It outlines the foundational problem and research objectives that necessitate a new framework for IDV in the age of AI, then details the evolution of identity verification and the current regulatory landscape to contextualize the need for a robust conceptual model. The core of the paper presents the ``Zero to One'' framework itself, dissecting its four essential components: Document Verification, Biometric Verification, Risk Assessment, and Orchestration. The paper concludes by discussing the implications of this conceptual model and suggesting future research directions for the framework's further development and application. The framework addresses security, privacy, user experience, and regulatory compliance, offering a structured approach to building effective IDV solutions. Successful IDV platforms require a balanced conceptual understanding of verification methods, risk management, and operational scalability, with AI as a key enabler in shaping next-generation IDV products.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
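The paper does not publish an implementation of its four components, but the orchestration idea can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the fixed scores, and the all-layers-must-pass policy are illustrative assumptions, not the authors' design.

```python
from dataclasses import dataclass


@dataclass
class VerificationResult:
    check: str
    passed: bool
    score: float  # 0.0 (certain fraud) .. 1.0 (certain genuine)


def verify_document(doc_image: bytes) -> VerificationResult:
    # Placeholder: a real system would run OCR, template and tamper checks.
    return VerificationResult("document", True, 0.92)


def verify_biometric(selfie: bytes, doc_image: bytes) -> VerificationResult:
    # Placeholder: face match between the selfie and the document portrait.
    return VerificationResult("biometric", True, 0.88)


def assess_risk(metadata: dict) -> VerificationResult:
    # Placeholder: device, velocity, and watchlist signals.
    score = 0.2 if metadata.get("vpn") else 0.9
    return VerificationResult("risk", score >= 0.5, score)


def orchestrate(doc_image: bytes, selfie: bytes, metadata: dict) -> bool:
    """Run the three verification layers and combine their outcomes."""
    results = [
        verify_document(doc_image),
        verify_biometric(selfie, doc_image),
        assess_risk(metadata),
    ]
    # Simple illustrative policy: every layer must pass individually.
    return all(r.passed for r in results)
```

A real orchestrator would route between layers dynamically (e.g. skipping biometric checks for low-risk sessions), which is exactly the coordination role the framework assigns to its Orchestration component.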
A Novel Active Solution for Two-Dimensional Face Presentation Attack Detection
Identity authentication is the process of verifying one's identity. Among the many authentication methods, biometric authentication is of particular importance. Facial recognition is a form of biometric authentication with various applications, such as unlocking mobile phones and accessing bank accounts; however, presentation attacks pose the greatest threat to it. A presentation attack is an attempt to present a non-live face, such as a photo, video, mask, or makeup, to the camera, and presentation attack detection is the countermeasure that distinguishes a genuine user from a presentation attack. Industries such as financial services, healthcare, and education use biometric authentication services on a wide range of devices, underscoring the significance of presentation attack detection as a verification step. In this paper, we survey the state of the art to cover the challenges and solutions related to presentation attack detection in a single place. We identify and classify the different presentation attack types and the state-of-the-art methods that can detect each of them. We compare the literature on attack types, evaluation metrics, accuracy, and datasets, and discuss the research and industry challenges of presentation attack detection. Most presentation attack detection approaches rely on extensive, high-quality training data, making them difficult to implement. We introduce an efficient active presentation attack detection approach that overcomes these weaknesses: it requires no training data, is CPU-light, can process low-quality images, has been tested with users of various ages, and is shown to be user-friendly and highly robust to two-dimensional presentation attacks.
- Europe > Finland > Northern Ostrobothnia > Oulu (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- Europe > Switzerland > Geneva > Geneva (0.04)
- Africa (0.04)
- Information Technology > Security & Privacy (1.00)
- Education > Educational Setting > Online (0.45)
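The abstract above does not disclose how its active approach works, but active presentation attack detection in general relies on challenge-response: the system asks the user to perform an action a flat photo cannot reproduce. The sketch below is a generic, hypothetical illustration of that idea (not the paper's method): it issues a "turn your head" challenge and checks whether a crude yaw proxy, computed from face landmarks, actually changes across frames.

```python
import math


def head_yaw_proxy(landmarks: dict) -> float:
    """Crude yaw proxy: horizontal offset of the nose tip from the
    midpoint of the two eyes, normalised by the inter-eye distance."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    nx, _ = landmarks["nose"]
    mid_x = (lx + rx) / 2
    inter_eye = math.hypot(rx - lx, ry - ly)
    return (nx - mid_x) / inter_eye


def passes_turn_challenge(frames: list, min_shift: float = 0.15) -> bool:
    """A static 2D photo held to the camera cannot change its yaw, so we
    require the yaw proxy to move by at least `min_shift` across frames."""
    yaws = [head_yaw_proxy(f) for f in frames]
    return max(yaws) - min(yaws) >= min_shift


# A live user turning the head: the nose drifts sideways between frames.
live = [
    {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60)},
    {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (42, 60)},
]
# A printed photo: the landmark geometry stays fixed.
spoof = [
    {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60)},
    {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60)},
]
```

Because this check needs only a handful of landmark coordinates per frame, it is CPU-light and tolerant of low-quality images, which is consistent with the properties the abstract claims for active approaches.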
Mitek pushes passwordless ID authentication with biometrics
ID verification software maker Mitek has released a passwordless authentication platform with multimodal biometrics. MiPass enables users to access digital accounts by taking a selfie and speaking a phrase with their phone, according to Mitek. The software can be embedded in applications via a dedicated software development kit. Mitek says use cases include simple account information updates, password resets, device rebinding and high-risk financial transactions. Chris Briggs, Mitek's head of products, says, "People are most loyal to companies that offer both convenience and security."
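Mitek does not publish MiPass's decision logic, but a multimodal system like this one must somehow combine a face-match score and a voice-match score into one accept/reject decision. A common textbook technique is weighted score-level fusion; the sketch below is a hypothetical illustration of that technique, and the weights and threshold are invented for the example.

```python
def fuse_scores(face_score: float, voice_score: float,
                w_face: float = 0.6, w_voice: float = 0.4) -> float:
    """Weighted-sum fusion of two matcher scores, each in [0, 1]."""
    return w_face * face_score + w_voice * voice_score


def authenticate(face_score: float, voice_score: float,
                 threshold: float = 0.8) -> bool:
    """Accept only when the fused score clears the operating threshold."""
    return fuse_scores(face_score, voice_score) >= threshold
```

One property of score-level fusion worth noting: a strong face match cannot fully compensate for a failed voice match (or vice versa), so spoofing a single modality is not enough to get accepted.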
Clearview reveals biometric presentation attack detection feature, talks training and testing
A new presentation attack detection feature has been added to the Clearview Consent API from Clearview AI to allow developers to build spoof detection into identity verification solutions. Clearview Consent was launched just months ago to bring the company's facial recognition algorithms to a whole new set of use cases as a selfie biometrics tool, and the addition of presentation attack detection capabilities is the next step in its development, according to the people who made it. Clearview considered a range of approaches, and CEO Hoan Ton-That points out that developers do not typically have access to the specialized hardware behind device-based 3D biometric systems. Early engagement with Clearview Consent customers has yielded some insights into how businesses and developers plan to use it, which not only convinced the company to pursue liveness detection based on 2D images, but also to imagine a range of applications. "We're looking at passive liveness video too, but some vendors have told us, 'We have these old profiles, and we want to find out how many of them are deepfakes and how many are presentation attacks,'" Ton-That tells Biometric Update in an interview.
Synthetic data getting serious for biometrics training
Synthetic data created by artificial intelligence systems for training other AI systems is a growing market, as generative adversarial networks (GANs) are used to train facial recognition and other biometric algorithms. The Washington Post profiles a company called Yuty and the path it took to providing synthetic facial datasets, and reports that it is one of around 50 startups in the space. The Post notes that Gartner has forecast that 60 percent of all AI training data will be synthetic by 2024. Amazon recently revealed that it relied heavily on synthetic data to train its palm biometrics. In a similar vein, OpenAI's DALL-E machine learning tool has updated a policy to allow its users to share synthetic facial images, after the tool's developers built in mechanisms to prevent its use in creating deepfakes, according to Vice.
- Information Technology > Security & Privacy (0.72)
- Media (0.54)
What will it take to stop fraud in the metaverse? - Information Age
Metaverses are on the horizon. How can we be sure the avatar with whom we're sharing intellectual property is really a genuine colleague? How can we trust that a virtual interaction with our bank manager, friend, or romantic partner isn't an interaction with a fraudster? And how can we protect our own digital identities from being stolen and used by people with nefarious intentions? In January, Meta boldly claimed to be building the world's most powerful AI supercomputer: Research SuperCluster (RSC). Set to launch in mid-2022, RSC will "help us build for the metaverse", said the company.
AI handily beats humans at biometric spoof attack detection in ID R&D research
Biometric spoofing attacks are more easily spotted by artificial intelligence-based computer systems than by people, according to new research published by ID R&D. The new report, 'Human or Machine: AI Proves Best at Spotting Biometric Attacks,' compares the relative effectiveness of humans and computers at detecting presentation attacks, in terms of speed and accuracy. Liveness detection was tested against images including spoof attempts with printed photos, videos, digital images, and 2D or 3D masks, according to the announcement. The company's IDLive Face accepted 0 percent of face biometric spoofs across all types of attacks and 175,000 images. People fared far worse, failing to spot spoofs in every category, and missed 30 percent of photo prints, one of the easiest spoof attacks for fraudsters to carry out.
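The accuracy figures in findings like these are conventionally reported as error rates over a labeled test set. The sketch below computes the two standard presentation attack detection error metrics from ISO/IEC 30107-3 and reproduces the article's headline numbers; the sample decision lists are invented to match the reported rates, not taken from ID R&D's data.

```python
def apcer(attack_decisions: list) -> float:
    """Attack Presentation Classification Error Rate: the fraction of
    presentation attacks wrongly accepted as genuine (True = accepted)."""
    return sum(attack_decisions) / len(attack_decisions)


def bpcer(bona_fide_decisions: list) -> float:
    """Bona Fide Presentation Classification Error Rate: the fraction of
    genuine presentations wrongly rejected (False = rejected)."""
    return sum(not d for d in bona_fide_decisions) / len(bona_fide_decisions)


# IDLive Face reportedly accepted none of the spoofs shown to it,
# i.e. an APCER of 0.0 on that test set; humans missing 30 percent of
# photo prints corresponds to an APCER of 0.3 for that attack category.
machine_on_attacks = [False] * 10            # every spoof rejected
humans_on_prints = [True] * 3 + [False] * 7  # 3 of 10 prints accepted
```

APCER and BPCER trade off against each other: a detector tuned to accept nothing achieves a perfect APCER at the cost of rejecting genuine users, which is why both rates are needed to compare systems fairly.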
Arming yourself against deepfake technology
YouTube recently reiterated its commitment to banning deepfake videos ahead of the US 2020 election. While this ban on technically manipulated videos of political figures isn't new and has been in place since the last presidential election in 2016, it illustrates just how increasingly difficult it is for the public (and organisations) to verify a person's true identity online. A deepfake today uses AI to combine existing imagery to replicate a person's face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never even spoken – hence the fear when it comes to general elections and politics being skewed by misinforming videos. Worryingly, the number of deepfakes online has doubled in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later.
- Information Technology > Security & Privacy (1.00)
- Government > Voting & Elections (1.00)
Jumio BrandVoice: Deepfakes: A Closer Look At Look-Alike Technology
In an age when Instagram filters and photoshopping have become standard, it has never been harder for organizations to verify a person's true identity online. Cybercriminals are deliberately using advanced technology to pull the wool over the eyes of organizations and defraud them. Deepfakes have recently emerged as a legitimate and scary fraud vector. A deepfake today uses AI to combine existing imagery to replace someone's likeness, closely replicating both their face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never even spoken.
- North America > United States (0.15)
- Europe > Russia (0.15)
- Asia > Russia (0.15)